Search results for "Markov Decision Process"
Showing 4 of 4 documents
Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop
2018
Active inference is an ambitious theory that treats perception, inference and action selection of autonomous agents under the heading of a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference at its core is independent from extrinsic rewards, resulting in a high level of robustness across e.g. different environments or agent morphologies. In the literature, paradigms that share this independence have been summarised under the notion of in…
Sequence Q-learning: A memory-based method towards solving POMDP
2015
A partially observable Markov decision process (POMDP) models a control problem in which states are only partially observable by an agent. The two main approaches to solving such tasks are those based on value functions and direct search in policy space. This paper introduces the Sequence Q-learning method, which extends the well-known Q-learning algorithm towards the ability to solve POMDPs by adding a special sequence management framework, advancing from action values to "sequence" values and including the "sequence continuity principle".
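For orientation, the baseline that Sequence Q-learning extends is standard tabular Q-learning. The sketch below is a minimal illustration of that baseline only (not of the paper's sequence-value extension), and the environment interface (reset, step, actions) is an assumed placeholder.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch; the `env` interface is hypothetical.
    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)  # maps (observation, action) -> estimated value
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                # epsilon-greedy action selection over the current observation
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(obs, a)])
                next_obs, reward, done = env.step(action)
                # one-step temporal-difference update toward the greedy target
                best_next = max(Q[(next_obs, a)] for a in env.actions)
                Q[(obs, action)] += alpha * (reward + gamma * best_next - Q[(obs, action)])
                obs = next_obs
        return Q

Because the update keys values on single observations, it struggles when observations alias distinct hidden states; the paper's move from action values to "sequence" values is aimed at exactly that limitation.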
A Cognitive Dialogue Manager for Education Purposes
2011
A conversational agent is a software system that is able to interact with users in a natural way, often using natural language capabilities. In this chapter, an evolution of a conversational agent is presented according to the definition of dialogue management techniques for conversational agents. The presented conversational agent is intended to act as part of an educational system. The chapter outlines the state-of-the-art systems and techniques for dialogue management in cognitive educational systems, and the underlying psychological and social aspects. We present our framework for a dialogue manager aimed at reducing the uncertainty in users' sentences during the assessment of hi…
Comprehensive Uncertainty Management in MDPs
2013
Multistage decision-making in robots involved in real-world tasks is a process affected by uncertainty. The effects of the agent's actions in a physical environment cannot always be predicted deterministically or precisely. Moreover, observing the environment can be too onerous for a robot, and hence not continuous. Markov Decision Processes (MDPs) are a well-known solution inspired by the classic probabilistic approach to managing uncertainty. On the other hand, including fuzzy logic and possibility theory has widened uncertainty representation. Probability, possibility, fuzzy logic, and epistemic belief allow treating different and not always superimposable facets of unce…
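As a point of reference for the "classic probabilistic approach" the abstract refers to, the sketch below runs value iteration on a toy MDP; the transition-table layout and the two-state example are assumptions made for illustration, not material from the paper.

    # Value iteration over a small MDP with stochastic action outcomes.
    # P[state][action] is an assumed layout: list of (probability, next_state, reward).
    def value_iteration(P, gamma=0.95, tol=1e-6):
        V = {s: 0.0 for s in P}
        while True:
            delta = 0.0
            for s in P:
                best = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in P[s]
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:
                return V

    # Tiny example: a "move" action that succeeds with probability 0.8.
    P = {
        "s0": {"move": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
               "stay": [(1.0, "s0", 0.0)]},
        "s1": {"stay": [(1.0, "s1", 0.0)]},
    }
    print(value_iteration(P))

The stochastic transition probabilities are where the MDP formalism absorbs action uncertainty; the fuzzy and possibilistic extensions discussed in the paper target facets of uncertainty that this purely probabilistic model does not capture.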